Fuzzy clustering algorithm based on belief subcluster cutting
Yu DING, Hanlin ZHANG, Rong LUO, Hua MENG
Journal of Computer Applications    2024, 44 (4): 1128-1138.   DOI: 10.11772/j.issn.1001-9081.2023050610
The Belief Peaks Clustering (BPC) algorithm is a variant of the Density Peaks Clustering (DPC) algorithm built on a fuzzy perspective: it uses fuzzy mathematics to describe the distribution characteristics and correlations of data. However, BPC relies mainly on the information of local data points when calculating belief values, rather than examining the distribution and structure of the whole dataset, and the robustness of its original allocation strategy is weak. To solve these problems, a fuzzy Clustering algorithm based on Belief Subcluster Cutting (BSCC) was proposed by combining belief peaks with a spectral method. Firstly, the dataset was divided into many high-purity subclusters using local belief information. Then, each subcluster was regarded as a new sample, and the spectral method was used for graph-cut clustering based on the similarity relationships between subclusters, thus coupling local and global information. Finally, the points in each subcluster were assigned to the cluster to which the subcluster belonged, completing the final clustering. Compared with BPC, BSCC has obvious advantages on datasets with multiple subclusters, achieving ACCuracy (ACC) improvements of 16.38 and 21.35 percentage points on the americanflag and Car datasets, respectively. Clustering experiments on synthetic and real datasets show that BSCC outperforms BPC and seven other clustering algorithms on the three evaluation indicators of Adjusted Rand Index (ARI), Normalized Mutual Information (NMI) and ACC.
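The subcluster-then-cut pipeline the abstract describes can be sketched as follows. This is an illustrative NumPy reconstruction, not the authors' BSCC formulation: the Gaussian similarity, the two-way normalized cut via the Fiedler vector, and the `sigma` parameter are all assumptions.

```python
import numpy as np

def subcluster_spectral_cut(X, sub_labels, sigma=1.0):
    """Two-way spectral cut over subclusters: treat each subcluster as one
    sample, build a similarity graph between subcluster centers, cut it with
    the Fiedler vector, then propagate labels back to the points."""
    ids = np.unique(sub_labels)
    centers = np.array([X[sub_labels == i].mean(axis=0) for i in ids])
    # Gaussian similarity between subcluster centers (assumed measure)
    d2 = ((centers[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    # Normalized Laplacian; its second eigenvector gives a 2-way graph cut
    deg = W.sum(axis=1)
    L = np.diag(deg) - W
    Lsym = L / np.sqrt(np.outer(deg, deg))
    _, vecs = np.linalg.eigh(Lsym)
    sub_cut = (vecs[:, 1] > 0).astype(int)   # cluster label per subcluster
    # Every point inherits the label of its subcluster
    return np.array([sub_cut[np.where(ids == s)[0][0]] for s in sub_labels])
```

Coupling local and global information happens in the last step: the cut is computed globally over subclusters, while membership in a subcluster was decided locally.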

Statistically significant sequential patterns mining algorithm under influence degree
Jun WU, Aijia OUYANG, Lin ZHANG
Journal of Computer Applications    2022, 42 (9): 2713-2721.   DOI: 10.11772/j.issn.1001-9081.2021071311
In traditional sequential pattern mining algorithms, the degree of support is not a good indicator of the interestingness of sequential patterns, and the quality of the reported patterns is not evaluated. To address these problems, a statistically significant sequential pattern mining algorithm under influence degree, called ISSPM (Influence-based Significant Sequential Patterns Mining), was proposed. Firstly, all sequential patterns meeting the interestingness constraint were mined recursively. Then, the itemset permuting method was introduced to construct a permutation-test null distribution for these sequential patterns. Finally, the statistical measures of the evaluated sequential patterns were calculated from this distribution, and all statistically significant sequential patterns were found among them. In experiments against the PSPM (Prefix-projected Sequential Patterns Mining), SPDL (Sequential Patterns Discovering under Leverage) and PSDSP (Permutation Strategies for Discovering Sequential Patterns) algorithms on real-world sequential record datasets, the ISSPM algorithm reports fewer but more interesting sequential patterns. Experimental results on synthetic sequential record datasets show that the average proportion of false-positive sequential patterns reported by ISSPM is 3.39%, and its discovery rate of embedded patterns is not less than 66.7%, both significantly better than those of the three comparison algorithms. The statistically significant sequential patterns reported by ISSPM thus reflect more valuable information in sequential record datasets, and decisions made on that information are more reliable.
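The permutation-test idea behind this kind of significance mining can be sketched as follows. This is a generic illustration, not the paper's method: the test statistic here is plain support, and items are shuffled within each sequence, whereas ISSPM uses its own itemset permuting method and influence-based measures.

```python
import random

def is_subseq(pat, seq):
    """True if pat occurs in seq as an (order-preserving) subsequence."""
    it = iter(seq)
    return all(item in it for item in pat)

def support(pat, db):
    return sum(is_subseq(pat, s) for s in db)

def permutation_p_value(pat, db, n_perm=200, seed=0):
    """Shuffle items within each sequence to destroy sequential order,
    and compare the pattern's support against this null distribution."""
    rng = random.Random(seed)
    obs = support(pat, db)
    geq = 0
    for _ in range(n_perm):
        permuted = []
        for s in db:
            s2 = list(s)
            rng.shuffle(s2)
            permuted.append(s2)
        geq += support(pat, permuted) >= obs
    return (geq + 1) / (n_perm + 1)   # smoothed empirical p-value
```

A pattern whose support survives the shuffling (large p-value) owes its frequency to item popularity rather than to genuine sequential structure.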

News recommendation model with deep feature fusion injecting attention mechanism
Yuxi LIU, Yuqi LIU, Zonglin ZHANG, Zhihua WEI, Ran MIAO
Journal of Computer Applications    2022, 42 (2): 426-432.   DOI: 10.11772/j.issn.1001-9081.2021050907
When mining news features and user features, existing news recommendation models often lack comprehensiveness, as they fail to consider the relationships between browsed news items, changes over time, and the different importance of individual news items to users; they also fall short in fine-grained content feature mining. Therefore, a news recommendation model with deep feature fusion injecting attention mechanism was constructed, which characterizes users comprehensively and without redundancy and extracts features of finer-grained news fragments. Firstly, a deep learning based method was used to extract the feature matrix of news text through a Convolutional Neural Network (CNN) injected with an attention mechanism. Then, by adding time-series prediction over the news that users had browsed and injecting a multi-head self-attention mechanism, users' interest characteristics were extracted. Finally, experiments were carried out on a real Chinese dataset and an English dataset with convergence time, Mean Reciprocal Rank (MRR) and normalized Discounted Cumulative Gain (nDCG) as indicators. Compared with Neural news Recommendation with Multi-head Self-attention (NRMS) and other models on the Chinese dataset, the proposed model improves nDCG by -0.22% to 4.91% and MRR by -0.82% to 3.48% on average, and compared with the only model over which its improvement is negative, it reduces convergence time by 7.63%. On the English dataset, the proposed model improves nDCG and MRR by 0.07% to 1.75% and 0.03% to 1.30% respectively, while always converging quickly. Results of ablation experiments show that adding the attention mechanism and the time-series prediction module is effective.
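The multi-head self-attention ingredient used to aggregate a user's browsed-news vectors is a standard mechanism and can be sketched in NumPy as below. This shows the generic computation only; the paper's model architecture, dimensions, and fusion with CNN features are not reproduced here.

```python
import numpy as np

def multi_head_self_attention(X, Wq, Wk, Wv, n_heads):
    """Scaled dot-product multi-head self-attention over a sequence of
    item vectors X of shape (L, d); d must be divisible by n_heads."""
    L, d = X.shape
    dh = d // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    outs = []
    for h in range(n_heads):
        s = slice(h * dh, (h + 1) * dh)
        scores = Q[:, s] @ K[:, s].T / np.sqrt(dh)
        a = np.exp(scores - scores.max(axis=1, keepdims=True))
        a /= a.sum(axis=1, keepdims=True)        # softmax over keys
        outs.append(a @ V[:, s])
    return np.concatenate(outs, axis=1)          # shape (L, d)
```

Each output row is a weighted mixture of all browsed-news vectors, which is what lets the model weigh the importance of different news items to the same user.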

Fake news detection method based on blockchain technology
Shengjia GONG, Linlin ZHANG, Kai ZHAO, Juntao LIU, Han YANG
Journal of Computer Applications    2022, 42 (11): 3458-3464.   DOI: 10.11772/j.issn.1001-9081.2021111885
Fake news not only misleads readers and damages people's right to know the truth, but also reduces the credibility of news websites. To counter the occurrence of fake news on news websites, a fake news detection method based on blockchain technology was proposed. Firstly, a smart contract was invoked to randomly assign reviewers to each news item to determine its authenticity. Then, the credibility of the review results was improved by adjusting the number of reviewers and ensuring a sufficient number of effective reviewers. An incentive mechanism was designed in which rewards are distributed according to the reviewers' behaviors, and those behaviors and rewards were analyzed with game theory: to gain the maximum benefit, reviewers should behave honestly. An auditing mechanism was also designed to detect malicious reviewers and improve system security. Finally, a simple blockchain fake news detection system was implemented with Ethereum smart contracts and simulated for fake news detection. The results show that the accuracy of news authenticity detection of the proposed method reaches 95%, indicating that the proposed method can effectively prevent the release of fake news.
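The assignment-and-settlement logic the abstract outlines can be sketched off-chain as follows. This is an illustrative model of the incentive design only, not the paper's Solidity contract; the stake/reward scheme shown here is an assumption.

```python
import random

def assign_reviewers(n_nodes, k, seed):
    """Randomly pick k distinct reviewers, as the smart contract does."""
    return random.Random(seed).sample(range(n_nodes), k)

def settle(votes, stakes, reward=1.0):
    """Majority-vote settlement: reviewers whose vote matches the majority
    verdict get their stake back plus a reward; the rest lose their stake."""
    outcome = sum(votes) * 2 > len(votes)        # True = "real news"
    payouts = [s + reward if v == outcome else 0.0
               for v, s in zip(votes, stakes)]
    return outcome, payouts
```

Under this payoff structure, voting with one's honest belief maximizes expected reward as long as most reviewers are honest, which is the game-theoretic point the abstract makes.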

Improved algorithm for no-reference quality assessment of blurred image
LI Honglin, ZHANG Qi, YANG Dawei
Journal of Computer Applications    2014, 34 (3): 797-800.   DOI: 10.11772/j.issn.1001-9081.2014.03.0797

To reduce the high computational cost of traditional methods, a fast and effective no-reference quality assessment algorithm for blurred images was proposed based on improving the classic Reblur (repeated blur) processing algorithm. Taking the human visual system into account, the proposed algorithm used local variance to select the image blocks that humans are interested in instead of processing the entire image, constructed blurred image blocks with a low-pass filter, and calculated the difference of adjacent pixels between the original and the blurred image blocks to obtain an objective quality evaluation parameter for the original image. The simulation results show that, compared with the traditional method, the proposed algorithm is more consistent with subjective evaluation, increasing the Pearson correlation coefficient by 0.01, and is less complex, halving the running time.
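The reblur principle the abstract builds on can be sketched for a single block as follows. This is an illustrative sketch assuming a 3x3 box filter and horizontal neighbour differences; the paper's filter, block selection by local variance, and pooling differ in detail.

```python
import numpy as np

def reblur_sharpness(block):
    """No-reference blur score for one image block: re-blur the block and
    measure how much the horizontal neighbour differences shrink. Sharp
    blocks lose much detail when re-blurred; already-blurred blocks lose
    little, so the score is near 0 for blurred input and near 1 for sharp."""
    k = np.ones((3, 3)) / 9.0                  # 3x3 box low-pass filter
    h, w = block.shape
    blurred = np.zeros_like(block)
    padded = np.pad(block, 1, mode='edge')
    for i in range(h):
        for j in range(w):
            blurred[i, j] = (padded[i:i + 3, j:j + 3] * k).sum()
    d_orig = np.abs(np.diff(block, axis=1)).sum()
    d_blur = np.abs(np.diff(blurred, axis=1)).sum()
    return (d_orig - d_blur) / max(d_orig, 1e-12)
```

Restricting this computation to high-variance blocks, as the abstract describes, is what cuts the running time relative to whole-image methods.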

Optimization of intra-MAP handover in HMIPv6
SUN Xiaolin, ZHANG Jianyang, JIA Xiao
Journal of Computer Applications    2014, 34 (2): 338-340.  
In the pointer forwarding schemes of Hierarchical Mobile IPv6 (HMIPv6), the influence of the distance between Access Routers (ARs) on handover performance has not been taken into consideration. To solve this problem, an optimization of intra-MAP (Mobility Anchor Point) handover in HMIPv6 based on pointer forwarding (OPF-HMIPv6) was proposed. OPF-HMIPv6 first compared the distance between ARs with the distance between the AR and the MAP, and gave priority to registering with the MAP rather than immediately building a pointer chain by registering with the AR. The simulation results show that OPF-HMIPv6 decreases the registration cost by 39% compared to HMIPv6 when the distance between the AR and the MAP is greater than the distance between ARs, which proves that the optimization reduces the overhead caused by binding updates and improves the efficiency of intra-MAP handover.
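The registration decision the abstract describes can be sketched as a simple rule. This is an illustrative reduction under assumed hop-count distances and an assumed chain-length limit; the paper's cost model is more detailed.

```python
def choose_registration(d_ar_ar, d_ar_map, chain_len, chain_limit=3):
    """OPF-HMIPv6-style decision sketch: register directly with the MAP
    when the new AR is farther from the previous AR than from the MAP
    (or the pointer chain is already full); otherwise extend the chain."""
    if d_ar_ar > d_ar_map or chain_len >= chain_limit:
        return "register_with_MAP"
    return "extend_pointer_chain"
```

The 39% cost reduction reported above corresponds to the first branch: when the MAP is the closer target, a direct binding update is cheaper than forwarding along a pointer chain.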
Dynamic community discovery algorithm of Web services based on collaborative filtering
Zhong WU, Gui-hua NIE, CHEN Dong-lin, ZHANG Peilu
Journal of Computer Applications    2013, 33 (08): 2095-2099.   DOI: 10.11772/j.issn.1001-9081.2013.08.2095
To cope with the low accuracy of mining results in existing community discovery algorithms and the low quality of intelligent recommendation of Web services resources, a dynamic community discovery algorithm based on node similarity was proposed on the basis of conventional collaborative filtering algorithms. Firstly, the central node with the most connected nodes was regarded as the initial network community, and community contribution degree was taken as the metric to continuously form a number of communities with globally saturated contribution degree. Then, an overlap calculation was used to merge communities of high similarity. Finally, the calculated results were arranged in descending order to form neighboring user sets, and the community recommendation object was obtained by calculating the dynamic similarity between the target user and the other users in the community. The experimental results show that the user social network community classification produced by the proposed algorithm is consistent with the real community classification. The proposed algorithm improves the accuracy of community mining and helps achieve high-quality community recommendation.
Dynamic demodulation of fiber Bragg grating sensing based on LabVIEW
HU Liaolin, ZHANG Weichao, HUA Dengxin, WANG Li, DI Huige
Journal of Computer Applications    2013, 33 (05): 1473-1475.   DOI: 10.3724/SP.J.1087.2013.01473
A dynamic Fiber Bragg Grating (FBG) sensing demodulation system based on LabVIEW was designed using a tunable optical filter. With a Virtual Instrument (VI) program written in LabVIEW, the tunable filter was controlled by a computer through a hardware interface to scan a fixed wavelength range. The photoelectric signal was collected by acquisition circuits and sent to the computer for analysis, so the reflection wavelength and the FBG's strain could be obtained. The results of dynamic sensing demodulation experiments were identical with those from a spectrum analyzer, which verifies the correctness and feasibility of the designed scheme.
Adaptive variable step-size blind source separation algorithm based on nonlinear principal component analysis
GU Fanglin, ZHANG Hang, LI Lunhui
Journal of Computer Applications    2013, 33 (05): 1233-1236.   DOI: 10.3724/SP.J.1087.2013.01233
The design of the step-size is crucial to the convergence rate of the Nonlinear Principal Component Analysis (NPCA) algorithm, but the commonly used fixed step-size can hardly satisfy the requirements of convergence speed and estimation precision simultaneously. To address this issue, a gradient-based adaptive step-size NPCA algorithm and an optimal step-size NPCA algorithm were proposed to speed up convergence and improve tracking ability. In particular, the optimal step-size NPCA algorithm linearly approximated the contrast function and derived the current optimal step-size; its step-size is adjusted according to the value of the contrast function and is free from any manually tuned parameters. The simulation results show that the proposed adaptive step-size NPCA algorithms have a faster convergence rate or better tracking ability than the fixed step-size NPCA algorithm at the same estimation precision, and the convergence performance of the optimal step-size NPCA algorithm is superior to that of the gradient-based adaptive algorithm.
Application of cultural algorithm in cross-docking scheduling
MAO Daoxiao, XU Kelin, ZHANG Zhiying
Journal of Computer Applications    2013, 33 (04): 980-983.   DOI: 10.3724/SP.J.1087.2013.00980
This paper studied the operational scheduling problem in a cross-docking center with a single receiving door, a single shipping door and finite temporary storage. A dynamic programming model was built with the objective of minimizing the total cost, including additional handling, temporary storage and truck replacement costs. A cultural algorithm with a two-layer evolutionary mechanism was proposed to solve the problem: the population space evolved with a genetic algorithm, while the belief space received good individuals from the population space to form knowledge that in turn guided the evolution. Numerical experiments on small- and large-scale instances prove the validity of the proposed cultural algorithm.
Delay tolerant network routing resource allocation model on general k-anycast
ZHANG Yong-hui, LIN Zhang-xi, LIU Jian-hua, LIANG Quan
Journal of Computer Applications    2012, 32 (12): 3494-3498.   DOI: 10.3724/SP.J.1087.2012.03494
Delay Tolerant Network (DTN) can cope with the frequent network disruption and segmentation of mobile Internet access, and routing resource allocation is one of its core technologies. However, in mobile networks carried by public transport or logistics, the knowledge oracles of existing DTN routing algorithms suffer from probabilistic time uncertainty, which reduces the efficiency of resource allocation. Therefore, general k-anycast was proposed to allocate bandwidth resources to k eligible access routers in the access period, diversifying the time deviation degrees and decreasing the uncertainty; access router information matrices determine the general k-anycast router aggregation, so packets can be transmitted simultaneously to multiple destinations. A probabilistic uncertain utility model was further proposed for routing resource allocation based on DTN custody transfer. Simulations show that its transmission performance and robustness are better than those of the Multicast DTN routing algorithm.
Lower energy adaptive clustering hierarchy routing protocol for wireless sensor network
LI Ling, WANG Lin, ZHANG Fei-ge, WANG Xiao-zhe
Journal of Computer Applications    2012, 32 (10): 2700-2703.   DOI: 10.3724/SP.J.1087.2012.02700
The Low-Energy Adaptive Clustering Hierarchy (LEACH) protocol selects cluster-head nodes randomly and in rotation, distributing network energy consumption evenly over the sensor nodes, but it does not consider the remaining energy of each node. To avoid the premature death of a low-energy node selected as cluster head, an improved algorithm named LEACH-New was proposed, which was based on energy probability to select nodes with more energy as cluster heads and to determine the optimal number of cluster-head nodes. The cluster-head nodes collected, fused and then sent the data to the base station through a combination of single-hop and multi-hop transmission. This algorithm resolves the problems in LEACH that low-energy nodes may be selected as cluster heads and that cluster-head energy is overloaded, thereby prolonging the lifetime of the whole network. The simulation results show that the improved algorithm effectively reduces network energy consumption and ensures network load balance.
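The energy-probability election the abstract describes can be sketched as residual-energy-weighted sampling without replacement. This illustrates the idea only; the probability formula and optimal head count in the paper differ.

```python
import random

def select_cluster_heads(energies, k, seed=0):
    """Elect k cluster heads with probability proportional to residual
    energy (roulette-wheel selection without replacement), so depleted
    nodes are unlikely to be chosen as heads."""
    rng = random.Random(seed)
    heads, pool = [], dict(enumerate(energies))
    for _ in range(k):
        total = sum(pool.values())
        r = rng.uniform(0, total)
        acc = 0.0
        for node, e in pool.items():
            acc += e
            if r <= acc:
                heads.append(node)
                del pool[node]
                break
    return heads
```

Compared with LEACH's uniform rotation, this weighting shifts the cluster-head burden toward nodes that can still afford it, which is what extends the network lifetime.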
Multiple-layer classification of Web emergency news based on rules and statistics
XIA Hua-Lin, ZHANG Yang-sen
Journal of Computer Applications    2012, 32 (02): 392-415.   DOI: 10.3724/SP.J.1087.2012.00392
Web news grows exponentially and spreads rapidly, and emergency news in particular spreads widely on the Internet; because traditional text classification has low accuracy and efficiency, it is difficult to locate emergency news and information on specific topics. This paper proposed a multi-layer classification method for Web emergency news based on rules and statistics. Firstly, category keywords were extracted to form a rule library. Secondly, emergency news was classified into four major categories by the rules, and these major categories were then divided into subcategories by the Bayesian classification method, establishing a two-tier classification model based on rules and statistics. The experimental results show that the classification accuracy and recall both reach over 90%, and the classification efficiency is generally higher than that of traditional classification methods.
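The two-tier rules-then-Bayes pipeline can be sketched as follows. The rule library, subcategories, and word probabilities below are made-up stand-ins, not the paper's data; the Bayes step is a bare naive-likelihood comparison.

```python
# Tier 1: keyword rules pick the major category.
RULES = {
    "earthquake": "natural_disaster", "flood": "natural_disaster",
    "explosion": "accident", "collision": "accident",
}

# Tier 2: toy P(word | subcategory) tables for a naive Bayes score.
SUBCATEGORY_WORD_PROBS = {
    "quake_news": {"earthquake": 0.6, "rescue": 0.3},
    "flood_news": {"flood": 0.6, "rescue": 0.2},
}

def classify(words):
    """Return (major category by rules, subcategory by naive Bayes)."""
    major = next((RULES[w] for w in words if w in RULES), "other")
    best, best_p = None, 0.0
    for sub, probs in SUBCATEGORY_WORD_PROBS.items():
        p = 1.0
        for w in words:
            p *= probs.get(w, 0.01)          # smoothed likelihood
        if p > best_p:
            best, best_p = sub, p
    return major, best
```

Running the cheap rule tier first is what gives the efficiency gain the abstract reports: the statistical classifier only has to discriminate within one major category.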
TDMA scheduling algorithm for multi-sink wireless sensor networks
LI Hai-ping, MAO Jian-lin, ZHANG Bin, CHEN Bo
Journal of Computer Applications    2012, 32 (02): 363-366.   DOI: 10.3724/SP.J.1087.2012.00363
Concerning the high packet delay and frequent transmission bottlenecks in a Wireless Sensor Network (WSN) with a single sink node, a multi-sink wireless sensor network model and its Time Division Multiple Access (TDMA) scheduling algorithm based on Genetic Algorithm (GA) were proposed. The algorithm divided the whole sensor network into several small networks according to the number and positions of the sink nodes, and adopted GA to optimize the slot allocation. The simulation results show that the GA-based TDMA slot allocation method outperforms the graph coloring algorithm in the length of the slot allocation frame, average packet delay and average energy consumption.
Bump feature extraction based on attributed adjacency graph in reverse engineering
SONG Liang-hao, LIU Guang-shuai, LI Bai-lin, ZHANG Li
Journal of Computer Applications    2011, 31 (11): 3031-3034.   DOI: 10.3724/SP.J.1087.2011.03031
Extracting combined features in Reverse Engineering (RE) is beneficial to improving the overall quality of the reconstructed model and reflecting the original design intention, but related research is currently limited. To extract the bump feature, a simple combined feature, from point cloud data in reverse engineering, a bump feature extraction method based on the Attributed Adjacency Graph (AAG) was proposed. Firstly, an algorithm based on AAG decomposition was used to recognize the stock feature. Then, the parameters of the stock feature were extracted and its type was distinguished. The experimental results show that the method extracts different types of stock features effectively and directly.
Design principle of Java-In-A-Box and its application to common fundamental module in point of sale
LI Gui-lin, ZHANG Wei-da
Journal of Computer Applications    2011, 31 (03): 831-833.   DOI: 10.3724/SP.J.1087.2011.00831
A design method named Java-In-A-Box (JIAB) was proposed to make Android suitable for developing large-scale programs. By re-encapsulating the inner components of the Android platform, the responsibility boundaries among user interface, business logic and data storage were made clear. Based on JIAB, the common fundamental module for an embedded Point of Sale (POS) terminal was designed and implemented. The results show that the JIAB method is suitable for developing large-scale applications on the Android platform.
Using mixed window function and subband spectrum centroid in MFCC feature extraction process
Huan Zhao, Lin Zhang
Journal of Computer Applications   
To improve speech recognition performance at low Signal-to-Noise Ratio (SNR), two methods were proposed to improve system robustness based on traditional Mel-Frequency Cepstral Coefficient (MFCC) feature extraction. One uses the side-lobe suppression of a mixed window function; the other incorporates subband amplitude information with the Mel-Subband Spectrum Centroid (MSSC), because spectral peak positions remain practically unaffected by background noise. Experimental results show that the mixed window function, MSSC, and their combination all improve the robustness of the system compared with the benchmark system based on traditional MFCC under stationary noises at low SNR.
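The subband spectrum centroid that MSSC adds to the filter-bank amplitudes can be sketched for one frame as below. This uses uniform subbands for clarity; Mel-spaced bands and the mixed window function are the paper's refinements and are not reproduced here.

```python
import numpy as np

def subband_spectral_centroids(frame, n_subbands, sr=16000):
    """Spectral centroid of each subband of one speech frame: the
    magnitude-weighted mean frequency within each band. Peak positions,
    and hence these centroids, change little under additive noise."""
    spec = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    edges = np.linspace(0, len(spec), n_subbands + 1, dtype=int)
    cents = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mag = spec[lo:hi]
        cents.append((freqs[lo:hi] * mag).sum() / (mag.sum() + 1e-12))
    return np.array(cents)
```

Appending these centroids to the usual log filter-bank energies gives the feature vector a noise-robust cue about where energy is concentrated within each band.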
A data packing method of statistic judge based on rough sets in incomplete information system
ZhongLin Zhang
Journal of Computer Applications   
This paper proposed a data packing method for incomplete information systems based on rough sets and grey system theory. The method uses the lower approximation in rough sets for a first round of data packing, and then, according to the value-taking probability of each attribute value, uses that result for a second round of packing, thereby completing the incomplete information system. The method adequately reflects the rules in the information system and avoids conflicts. When the data in the information system and the missing data are evenly distributed, the packed data reflect the true situation of the information system.
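The two-stage completion idea can be sketched as follows. This is a simplified stand-in, not the paper's rough-set/grey-system method: stage one borrows a value from an object that is indiscernible on all known attributes (lower-approximation style), and stage two falls back to the most frequent value of the column.

```python
def complete_table(rows, missing=None):
    """Fill missing cells of an information table in two passes."""
    filled = [list(r) for r in rows]
    for row in filled:
        for j, v in enumerate(row):
            if v is not missing:
                continue
            # Stage 1: copy from a row indiscernible on all known attributes
            for other in rows:
                if other[j] is not missing and all(
                        a == b
                        for k, (a, b) in enumerate(zip(row, other))
                        if k != j and a is not missing and b is not missing):
                    row[j] = other[j]
                    break
            # Stage 2: fall back to the attribute's most probable value
            if row[j] is missing:
                vals = [r[j] for r in rows if r[j] is not missing]
                row[j] = max(set(vals), key=vals.count)
    return filled
```

Stage one is what keeps the completion consistent with the decision rules already present in the table; stage two only fires when no indiscernible object exists.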